Training Your Replacement: How Chinese Tech Workers Are Being Asked to Build Their AI Doubles — and Fighting Back

In a glass-walled meeting room in a Beijing office, a product manager named Li spent an afternoon role-playing her job with a laptop open beside her. She recited the steps she would take to triage a bug report, explained the tradeoffs behind a design decision, and answered hypothetical customer questions. The company’s goal was explicit: to generate a detailed dataset that could teach an internal conversational agent to do the same work she does.

Li’s story is not unusual. Across China’s technology sector, a growing number of employees — from customer service representatives to engineers, moderators to account managers — have been asked to help train AI models that emulate their knowledge, tone and judgment. The companies framing these requests say the effort will make internal tools smarter, speed onboarding and free employees for higher-value work. For many staff, however, the work feels like building a digital mirror that could one day be used to replace them.

The new workplace choreography

“AI double” has become workplace shorthand for an agent trained on an employee’s recorded inputs, workflows and conversations so that it can act in the same role. The practice often looks mundane: employees answer scripted prompts; document standard operating procedures in granular detail; annotate customer interactions; or run through simulated conversations while the company logs and curates the responses.
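
Each of these sessions yields structured records. The sketch below illustrates what one curated record might look like; the field names are hypothetical, chosen for illustration rather than taken from any company’s schema.

```python
# A minimal sketch of one curated training record. All field names here
# are illustrative assumptions, not any particular company's schema.
from dataclasses import dataclass, field


@dataclass
class TrainingRecord:
    employee_role: str     # e.g. "product manager"
    prompt: str            # the scripted question posed to the employee
    response: str          # the employee's recorded answer
    rationale: str         # the decision reasoning captured alongside it
    tags: list[str] = field(default_factory=list)  # curation labels


record = TrainingRecord(
    employee_role="product manager",
    prompt="A customer reports a payment bug. What do you do first?",
    response="Reproduce the issue, check recent deploys, then triage severity.",
    rationale="Reproduction rules out user error before engineering is paged.",
    tags=["triage", "bug-report"],
)
```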

Technically, these projects sit at the intersection of knowledge engineering, fine-tuning and retrieval-augmented generation. They rely on collecting large volumes of job-specific text, private notes and decision rationales. In return, companies promise better search, automated drafting, faster responses and fewer routine tasks. For organizations scaling across languages, regions and product lines, the allure is obvious: an army of AI agents that carry institutional knowledge without all the coordination overhead of human teams.
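
To make that pipeline concrete, here is a toy retrieval-augmented generation step: documented procedures are ranked against a query and packed into a prompt for a model. Real deployments use vector embeddings and a hosted or fine-tuned LLM; the keyword matching and string template below are deliberate simplifications for illustration.

```python
# A toy retrieval-augmented generation loop. Real systems embed documents
# as vectors and call an LLM; this sketch substitutes naive keyword
# overlap and a string template, purely to show the shape of the pipeline.

KNOWLEDGE_BASE = [
    "Bug triage: reproduce the issue, check recent deploys, assign severity.",
    "Refund policy: refunds within 30 days require a manager's approval.",
    "Onboarding: new hires shadow a senior agent for their first week.",
]


def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank knowledge snippets by word overlap with the query."""
    words = set(query.lower().split())
    ranked = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]


def build_prompt(query: str) -> str:
    """Assemble the prompt an internal agent would send to its model."""
    context = "\n".join(retrieve(query))
    return f"Using only this context:\n{context}\n\nAnswer: {query}"


print(build_prompt("How should I triage a new bug report?"))
```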

Where enthusiasm meets unease

But there is a tension beneath the efficiency pitch. Employees describe three overlapping anxieties.

  • Replacement risk: If an AI can handle common tasks and respond to customers in the same voice, what happens to the original job? For workers who already feel overworked or exposed to performance evaluations tied to productivity metrics, the prospect of being replaced by a trained agent is destabilizing.
  • Intellectual property and control: Many workers worry that the ideas, heuristics and customer relationships they poured into training datasets become property of the company. Once encoded in a model, these contributions can be copied, repurposed or commercialized in ways the employee did not anticipate.
  • Consent and transparency: Employees report being asked to contribute without clear answers about how the data will be used, who will access it, whether it will leave the company or how long a “digital double” will be operational.

These concerns have prompted a range of responses: some workers quietly refuse to participate; others file internal complaints; a few seek legal counsel or collaborate with coworkers to demand guardrails. The pushback is as varied as the promise of automation is sweeping — and it raises broader questions about how workplaces will govern the making of machine replicas of their labor.

Ethics, career development and blurred boundaries

The ethics of building AI doubles touches on familiar themes — consent, fairness, accountability — but in a new configuration. When the person who writes the playbook is asked to build the player, boundaries erode. The move reframes knowledge work as a raw material that companies can repackage into software. That shift creates cascading effects on career paths.

Consider an engineer who documents a troubleshooting checklist. That checklist helps junior colleagues learn faster. If it also becomes the basis for a support agent that handles 80% of incidents, the organization may see less need to hire or promote junior staff. Promotion criteria, once tied to mentorship or problem-solving, may pivot toward oversight of AI systems rather than hands-on expertise. Skills that were once signals of mastery — customer empathy, tacit heuristics, situational judgment — risk being extracted and commodified.

There are also psychological costs. Workers report feeling depersonalized when asked to act as the raw material for a model. The process can make tacit knowledge suddenly explicit and brittle, reduce pride in craft, and create moral friction when employees imagine their own words being used without safeguards. For front-line workers whose relationships with customers define their role, the idea of a faceless agent taking over those interactions is especially fraught.

Questions of fairness and legal gaps

Current labor laws and data-protection frameworks were not designed for this scenario. Where does an employee’s work end and the company’s intellectual property begin? If an AI trained on an individual’s interactions makes a harmful decision, who is responsible? How should workers be compensated for providing the training data that creates commercial value?

In many cases, agreements signed at hiring or during periodic policy updates give companies broad rights to employee-generated content. But consent that is coerced — whether through implied pressure or economic necessity — is not true choice. When a directive comes from management with a clear operational imperative (“all teams must contribute to model training”), opting out can feel impossible.

Paths of resistance and negotiation

Workers are not passively accepting this transformation. Pushback has taken multiple forms:

  • Refusal: individuals decline to participate or withhold certain materials they deem sensitive or personal.
  • Collective action: employees coordinate to set terms, demand transparency or negotiate for compensation and protections.
  • Visibility: public accounts, internal letters and whistleblowing have placed pressure on companies to explain their plans and safeguards.
  • Legal inquiry: some workers pursue legal pathways to clarify rights over their contributions and the use of their data.

These responses show a growing labor literacy around algorithmic workplace change. They also highlight the need for meaningful governance that recognizes employees as stakeholders in the design and deployment of AI systems that replicate their work.

Designing AI doubles that respect people

It would be easy to frame this debate as zero-sum: either companies build AI and eliminate jobs, or workers reject automation and block progress. The reality can be more nuanced. The choice firms face is not whether to use AI — it is how to integrate it in ways that preserve human dignity, knowledge ownership and fair transitions.

A set of practical safeguards can make a difference:

  • Informed, revocable consent: Employees should receive clear, role-specific explanations of how their contributions will be used, for how long, and with what downstream implications. Consent should not be a one-time checkbox; workers need the option to withdraw participation without career penalty.
  • Compensation and benefit-sharing: When employee data generates commercial value, mechanisms for compensation — from bonuses to revenue-sharing arrangements — can align incentives and acknowledge labor’s contribution to model value.
  • Co-ownership and licensing: Structures that allow employees to retain partial rights to their knowledge artifacts or to license them under defined terms would rebalance power and reduce the feeling that one’s craft was expropriated.
  • Transparency and auditability: Companies should document the provenance of training data, allow workers to review what personal or proprietary material was included, and provide recourse to remove sensitive items (a minimal data model for this kind of record-keeping is sketched after this list).
  • Human oversight rules: Limits on deploying AI doubles without human-in-the-loop checks for critical decisions, especially in customer-facing and safety-sensitive contexts, preserve accountability and reduce harm from model errors.
  • Sunset clauses and redeployment commitments: If an AI reduces headcount for a given task, firms should commit to retraining, redeployment or transition support for affected employees rather than immediate layoffs.
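
Several of these safeguards come down to bookkeeping that software can enforce. Below is a minimal sketch, in Python, of consent and provenance metadata attached to a contributed record. The field names and the audit-store setting are assumptions for illustration, not any company’s actual schema; the key property is that revocation or expiry excludes the record from future training runs.

```python
# A minimal sketch of consent and provenance metadata for a contributed
# record, assuming a hypothetical internal audit store. Withdrawal flips
# a flag that the training pipeline must honor.
from dataclasses import dataclass
from datetime import date


@dataclass
class ContributionRecord:
    record_id: str
    contributor: str        # employee who supplied the material
    purpose: str            # the use the employee actually consented to
    consent_expires: date   # consent is time-bound, not perpetual
    withdrawn: bool = False # revocation excludes the record from training

    def usable_for_training(self, today: date) -> bool:
        return not self.withdrawn and today <= self.consent_expires


rec = ContributionRecord(
    record_id="pm-triage-0042",
    contributor="Li",
    purpose="internal drafting assistant only",
    consent_expires=date(2026, 12, 31),
)
rec.withdrawn = True  # the worker revokes participation
assert not rec.usable_for_training(date.today())
```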

Designing for augmentation, not erasure

Reimagining AI doubles as augmentations rather than replacements requires cultural and design choices. Companies can configure agents to be explicit about their provenance, signpost when a customer is interacting with an AI, and design workflows where the human remains the final arbiter. A “double” can be a drafting assistant that surfaces options, a knowledge search that accelerates problem-solving, or a scheduling aid that frees time for strategic work — rather than a fully autonomous substitute.
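
The “final arbiter” pattern is simple to encode. In the sketch below, generate_draft is a hypothetical stand-in for whatever model a company actually runs; the essential pieces are the mandatory human approval hook and the disclosure line appended to any outgoing message.

```python
# A sketch of the human-in-the-loop pattern: the double drafts, a person
# approves, and outbound messages are signposted as AI-assisted.
from typing import Callable, Optional


def generate_draft(ticket: str) -> str:
    # Stand-in for whatever model the company actually runs.
    return f"Thanks for reporting this. We are investigating: {ticket}"


def send_reply(ticket: str, approve: Callable[[str], bool]) -> Optional[str]:
    draft = generate_draft(ticket)
    if not approve(draft):  # the human reviewer can always veto
        return None
    # Disclosure: the recipient is told an AI helped write the message.
    return draft + "\n\n-- Drafted with AI assistance, reviewed by a human."


# The approval hook is where a real UI would put the human reviewer;
# this placeholder approves automatically for demonstration only.
reply = send_reply("payment page times out", approve=lambda draft: True)
print(reply)
```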

There is also room for creative, worker-friendly models. Imagine employee-curated knowledge repositories that are licensed to the company under fair terms, or internal marketplaces where employees can opt in to share deep expertise for premium compensation. These approaches recast the worker from an input into a stakeholder in value creation.

Policy and governance: what governments can do

Regulators and policymakers will play a critical role in setting the baseline protections for this new frontier. Practical steps include:

  • Mandating disclosure when companies use employee-derived data to train models that could replace jobs.
  • Creating standards for informed consent and revocation in workplace data collection.
  • Requiring impact assessments that measure displacement risk and outline mitigation strategies before deployment.
  • Establishing portability rights so workers can transfer their professional data across employers or reclaim it after leaving.
  • Updating labor law to recognize contributions to AI training as labor with potential entitlement to compensation or protections.

Such rules would not outlaw AI innovation. Instead, they would set guardrails that shape healthier transitions — enabling companies to realize efficiency gains while protecting workers’ rights and livelihoods.

Global reverberations

China is not alone in this dynamic. Multinational firms, startups and public-sector organizations globally are experimenting with staff-derived AI agents. The governance choices made in one market ripple outward through supply chains, platform integrations and shared technical standards. If permissive practices proliferate — where companies freely extract and commercialize employee knowledge — we may see a global reconfiguration of knowledge labor markets.

Conversely, if workers and regulators insist on stronger protections, new norms could emerge that emphasize shared ownership, transparency and human-centered augmentation. Those norms would reshape how AI is trained, monetized and governed around the world.

A call to shape the future

At its best, technology amplifies human capacities. At its worst, it extracts value without regard for the people who created it. The story unfolding in Chinese tech firms is a test case for which path we choose. Workers pushing back against the creation of their AI doubles are not Luddite holdouts; they are stakeholders demanding a seat at the table where the rules of automation are being written.

Practical, enforceable frameworks, created through negotiation among companies, workers and policymakers, can make AI deployment more just and sustainable. Designing with workers — not merely around them — will yield systems that are more robust, humane and ultimately more valuable. That requires clear consent, fair compensation, meaningful oversight and a cultural commitment to augmentation over erasure.

Training an AI double does not have to mean training your replacement. It can mean handing a worker a tool that absorbs the repetitive so the human can focus on the creative, the relational and the judgment calls. But whether that promise becomes reality depends on choices made today — choices about transparency, ownership, and who gets to define the purpose of these machines.

The future of work is not preordained. It will be shaped by the conversations we have now about the rights of those who teach our machines to be us. That debate is already happening — in office meetings, in collective bargaining conversations, and in the quiet refusals that ripple through teams. Listen to those voices. Design systems with them. The path we choose will determine whether AI becomes a companion in human flourishing or a mirror that returns a future without the people who made it possible.

Further reading and resources

For those following this evolving story, tracking company policies on workplace data use, reading internal worker letters, and monitoring regulatory proposals will reveal how these debates take institutional form. The decisions firms make now will ripple far beyond any single office.

Noah Reed
http://theailedger.com/
