Imagine your boss tells you to train the AI that will eventually replace you. Not in some distant dystopian future — right now. For tech workers in China, this isn't science fiction. It's Monday morning.
Earlier this month, a GitHub project called Colleague Skill went viral on Chinese social media. Created by Tianyi Zhou, an engineer at the Shanghai Artificial Intelligence Laboratory, the tool claims it can "distill" your coworkers' skills, personality traits, and even their quirky punctuation habits into an AI agent that replicates them perfectly. Want to replace your annoying teammate? Just feed their Lark chat history into Colleague Skill and boom — you've got an AI clone that debugs code at 3 AM without complaining about overtime.
Here's the punchline: it started as a spoof. Zhou told Chinese outlet Southern Metropolis Daily he built it as a stunt, prompted by real AI-related layoffs and the growing trend of companies asking employees to automate themselves. But the joke landed too close to home. Because behind the memes and the GitHub stars, something genuinely unsettling is happening in China's tech industry.
The OpenClaw Gold Rush Meets Workplace Reality
Since OpenClaw became a national craze in China, bosses have been aggressively pushing tech workers to experiment with AI agents. And we're not talking about simple chatbots answering customer emails. These agents can take control of computers, read and summarize news, reply to messages, book restaurant reservations — essentially performing digital tasks that previously required human judgment.
But here's the gap: AI agents have proven limited in real business contexts. They can book your dinner, but can they navigate the subtle politics of a cross-team code review? Can they sense when a stakeholder is about to pivot the entire project based on a throwaway comment in a meeting?
Not yet. And that's where the "distillation" comes in.
Companies are now asking employees to create detailed manuals describing the minutiae of their day-to-day jobs. Every workflow. Every decision tree. Every "I usually check this first, but if that fails, I try this workaround I learned from Jerry in 2019." The goal is to bridge the gap between generic AI capabilities and the messy, contextual, deeply human knowledge that actually makes businesses run.
"It Even Captures Their Little Quirks"
Amber Li, a 27-year-old tech worker in Shanghai, tried Colleague Skill as a personal experiment after seeing it on social media. She used it to recreate a former coworker.
"It is surprisingly good," she told MIT Technology Review. "It even captures the person's little quirks, like how they react and their punctuation habits."
Within minutes, the tool generated a file detailing how that person did their job — their communication style, their problem-solving patterns, their unique tics. With this "skill," Li can now use an AI agent as a new "coworker" that helps debug her code and replies instantly.
Her reaction? "It felt uncanny and uncomfortable."
And that's from someone who volunteered to try it. Imagine being ordered to document yourself into obsolescence.
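To make the idea less abstract, here's a toy sketch of what "distilling" a chat history into a style profile could look like. This is a hypothetical illustration, not Colleague Skill's actual code or file format — the function name, fields, and metrics are all assumptions.

```python
import json
import re
from collections import Counter

def distill_style(messages):
    """Toy 'distillation': reduce a coworker's chat history to a style
    profile. Hypothetical sketch -- not Colleague Skill's real format."""
    total = len(messages)
    # Count punctuation quirks, e.g. exclamation marks per message.
    exclaims = sum(m.count("!") for m in messages)
    # Tally vocabulary to surface the person's go-to words.
    words = [w.lower() for m in messages for w in re.findall(r"[a-zA-Z']+", m)]
    favorites = [w for w, _ in Counter(words).most_common(3)]
    return {
        "messages_seen": total,
        "exclamations_per_message": round(exclaims / total, 2) if total else 0.0,
        "favorite_words": favorites,
        "avg_message_length": round(sum(len(m) for m in messages) / total, 1) if total else 0.0,
    }

chat = [
    "LGTM! Shipping it!",
    "Hmm, did you check the retry logic first?",
    "Works on my machine! Classic.",
]
profile = distill_style(chat)
print(json.dumps(profile, indent=2))
```

Even this crude version hints at why the real thing feels uncanny: a few surface statistics already read like a personality.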
The Boss's Playbook: Why Companies Want Your Brain in a Bottle
Hancheng Cao, an assistant professor at Emory University who studies AI and work, explains the corporate logic clearly: "Firms gain not only internal experience with the tools, but also richer data on employee know-how, workflows, and decision patterns. That helps companies see which parts of work can be standardized or codified into systems, and which still depend on human judgment."
In other words: your boss wants to know exactly which parts of you are replaceable.
One software engineer, speaking anonymously to MIT Technology Review because of job security concerns, trained an AI on their workflow and described the process as "reductive — as if their work had been flattened into modules in a way that made them easier to replace."
That's the psychological gut-punch. It's not just about losing your job. It's about watching your professional identity — years of accumulated judgment, intuition, relationships, and yes, even those quirky punctuation habits — get reduced to a JSON file that a manager can download before your exit interview.
Dark Humor as Defense Mechanism
Chinese tech workers have responded with the internet's most reliable coping mechanism: bleak, brilliant humor.
On Rednote, one user wrote that "a cold farewell can be turned into warm tokens" — a poetic way of saying that if you distill your coworkers into AI tasks first, you yourself might survive a little longer. The joke captures the zero-sum anxiety perfectly: in a world where anyone can be replaced by their own digital ghost, maybe the only survival strategy is making sure someone else gets digitized before you do.
Other workers joked about automating their most annoying colleagues first. "Finally, a way to get Dave to answer emails at 2 AM without the complaining," one user quipped. The humor is a pressure valve, but the pressure is real.
The Anti-Distillation Resistance
Not everyone is joking. Some are fighting back — with code.
Koki Xu, a 26-year-old AI product manager in Beijing, was irritated by the entire premise of reducing people to "skills" that can be extracted and replicated. So on April 4, she published an "anti-distillation" tool on GitHub.
The concept is genius: users can choose between light, medium, and heavy sabotage modes depending on how closely their boss is watching. The tool rewrites workflow documentation into generic, non-actionable language that would produce a less useful AI stand-in. Think of it as digital labor resistance — a way to comply with the letter of your boss's request while completely undermining its spirit.
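The mechanics might look something like the sketch below — a hypothetical illustration of the light/medium/heavy idea, not Xu's actual tool. The rules, mode names, and replacement strings are all assumptions: each mode strips progressively more actionable detail from a workflow doc.

```python
import re

# Toy "anti-distillation" pass (hypothetical; not Xu's actual tool):
# rewrite a workflow doc so an AI trained on it learns nothing actionable.
RULES = {
    "light":  [(r"\b(19|20)\d{2}\b", "a while back")],           # scrub dates
    "medium": [(r"\b(19|20)\d{2}\b", "a while back"),
               (r"`[^`]+`", "the relevant system")],             # scrub tool names
    "heavy":  [(r"\b(19|20)\d{2}\b", "a while back"),
               (r"`[^`]+`", "the relevant system"),
               (r"\b(restart|check|run|ping)\b", "deal with")],  # scrub concrete verbs
}

def anti_distill(doc: str, mode: str = "medium") -> str:
    """Apply increasingly aggressive vagueness, depending on how
    closely the boss is watching."""
    for pattern, replacement in RULES[mode]:
        doc = re.sub(pattern, replacement, doc)
    return doc

doc = "If the build fails, restart `cache-srv`, then ping Jerry (trick from 2019)."
print(anti_distill(doc, "heavy"))
```

The output still looks like compliance — it's recognizably a workflow document — but every specific that would make an AI stand-in useful has been sanded away.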
A video Xu posted about the project went viral, drawing more than 5 million likes across platforms.
"I originally wanted to write an op-ed, but decided it would be more useful to make something that pushes back against it," Xu told MIT Technology Review. She has undergraduate and master's degrees in law, and she sees deeper issues at stake.
The Legal and Ethical Minefield
Xu raises a critical question: who owns your professional personality?
Companies can argue that work chat histories and materials created on company laptops are corporate property. But a tool like Colleague Skill captures something more intimate — elements of personality, tone, judgment, and communication style that blur the line between "work product" and "personal identity."
"I believe it's important to keep up with these trends so we (employees) can participate in shaping how they are used," Xu says. She herself is an avid AI adopter, with seven OpenClaw agents set up across her personal and work devices. Her resistance isn't anti-technology — it's pro-worker.
The legal frameworks for this don't exist yet. Can a company claim ownership of your "communication style"? Can they copyright your tendency to use exclamation points in Slack messages? The absurdity of these questions highlights how fast the technology is outpacing our ability to govern it.
The Reliability Problem: Why Humans Still Matter
For all the dystopian framing, there's a practical reality check: AI agents still kinda suck at complex work.
Amber Li, the Shanghai tech worker who tested Colleague Skill, says her company hasn't actually found a way to replace workers with AI tools yet. "I don't feel like my job is immediately at risk," she says. "But I do feel that my value is being cheapened, and I don't know what to do about it."
That's the crux of it. Even if the technology isn't ready to fully replace workers today, the process of making workers document their own replaceability is doing damage. It's changing the psychological contract between employer and employee. It's turning every workflow document into a potential resignation letter written in advance.
And when the technology does catch up — and it will — companies will have years of "distilled" employee knowledge ready to deploy. The workers who documented themselves out of relevance won't be around to see it, but their digital ghosts will be.
What This Means for the Global Tech Industry
China isn't an outlier here — it's a preview.
The same dynamics are playing out, more quietly, in tech hubs from San Francisco to Bangalore. Companies everywhere are experimenting with AI agents, and the pressure to "document your workflows" is becoming standard practice. The difference is that China's tech culture, with its intense work hours and top-down management styles, is making the tension visible earlier and more dramatically.
The Colleague Skill phenomenon — and the backlash against it — is a warning sign for the global industry. Workers will resist being reduced to training data. They'll fight back with humor, with sabotage tools, with legal challenges, and ultimately with their feet if necessary.
Smart companies will recognize this and involve workers in shaping how AI agents are deployed. Dumb companies will try to extract every drop of institutional knowledge before discarding the humans who created it. The latter approach might save money in the short term, but it'll cost them the trust and creativity of the next generation of talent.
The Bottom Line
We're at an inflection point. The technology to replicate workers exists. The corporate incentive to use it is overwhelming. And the human resistance is just beginning.
The Chinese tech workers experimenting with — and pushing back against — Colleague Skill are canaries in the coal mine. Their dark humor, their anti-distillation tools, their legal questions about identity ownership — these are the early signals of how the global workforce will respond to being asked to train their replacements.
The question isn't whether AI will change work. It's whether we'll let it change work for us or against us.
And right now, in Shanghai and Beijing and Shenzhen, some very smart, very annoyed tech workers are writing the playbook for resistance — one sabotaged workflow document at a time.