Replaced by Models: A Kingdom Come Translator’s Claim and What It Reveals About AI in Game Localization
When a Czech-to-English translator who spent years shaping voice, idiom and cultural texture for a major upcoming title says he was let go because an AI system can now do his job, the story is more than an individual grievance. It is a flashpoint that illuminates how generative models, machine translation and automated tooling are reconfiguring labor, quality and cultural stewardship in an industry that depends on human nuance: game localization.
The claim and why it matters
The translator at the center of this conversation alleges that the studio behind Kingdom Come: Deliverance 2 replaced parts of the localization team with AI-driven processes. He says he was told his services were no longer required after internal tests with new tooling. The company has not offered detailed public confirmation of that exact sequence, but the claim echoes a pattern: studios piloting or adopting automated translation and text-generation systems to lower costs and accelerate schedules.
Why does this matter beyond one person’s livelihood? Because localization is not just a mechanical conversion of strings. It is a craft that mediates cultural context, character voice, era-appropriate diction, humor, and narrative stakes. Games like Kingdom Come are narrative-first, historically flavored, and heavily contextual. The way a character swears, charms, or insults in Czech carries centuries of nuance that must be rendered into English without flattening personality or rupturing immersion. When players notice slippage in voice, the damage falls on artistic integrity and player trust, and those are real, long-term business risks.
What automated tools are actually doing
Over the past five years, the localization workflow has absorbed a toolbox of automated capabilities: large language models that can draft translations and adapt tone, neural machine translation systems specialized for gaming glossaries, and post-edit assistants that propose phrasing changes. These systems are fast, inexpensive at scale, and increasingly competent at producing fluent, idiomatic text for generic material.
That competence can be seductive for studios facing tight deadlines and swelling text loads. Where a human team once needed a month-long sprint to deliver a polished set of localized scripts, a model can produce a usable first draft in minutes. The immediate business math is straightforward: fewer billable hours, faster deliverables, and lower upfront costs.
Where automation falls short
But speed and fluency are not the whole story. There are several enduring gaps between automated output and what human practitioners deliver:
- Contextual anchoring: A human translator who has played the game or worked closely with narrative leads can anchor translation choices to story beats, mechanics, and lore. Models process text in isolation unless they are augmented with detailed game context.
- Character consistency: Maintaining a character’s distinct voice across hundreds of lines requires memory of prior choices, punchlines and inflection. Models can mimic voice locally but struggle with long-term consistency unless tightly supervised.
- Humor and cultural references: Jokes, puns, and culture-specific signals often require creative re-writing, not literal translation. Human translators invent localized solutions that preserve intent; models may produce literal or bland substitutes.
- Ethical and legal judgment: Localization choices can implicate cultural sensitivities, legal content ratings, and regional norms. Humans apply judgment; automated systems act probabilistically.
- Quality handoff and iteration: Human translators frequently iterate with writers and voice directors in a back-and-forth that refines nuance. Replacing that collaborative loop with asynchronous automation risks misinterpretation.
Economic and human consequences
The case in question underscores several broader consequences for professionals in localization and related fields.
- Job displacement and precarity: As studios test models, freelance windows shrink and companies reduce headcount. Contractors — who form a substantial portion of localization work — are especially vulnerable because they have limited recourse when a single client automates part of the workflow.
- Devaluation of craft: When automated outputs are accepted without rigorous human post-editing, the market begins to equate faster and cheaper with acceptable. Over time, that devalues the skill set of seasoned translators and their ability to negotiate fair compensation.
- Concentration of oversight: Responsibility for translation choices can consolidate in the hands of a smaller group of managers or engineers who may lack domain-specific expertise in linguistics or cultural adaptation.
- Homogenization of voices: If models trained on pooled datasets become the default, localized dialogue and character speech risk sounding increasingly uniform across titles and studios, eroding diversity of expression.
Quality, accountability and transparency
If automation becomes part of localization pipelines, studios and the wider industry need guardrails to preserve quality and accountability. That starts with transparency: teams should document when and where models were used, what prompts or datasets informed their output, and how human oversight was applied. Transparency allows downstream testers, community members, and localizers to audit and assess the fidelity of localized content.
Equally important is process design. Human-in-the-loop workflows, where models generate candidate text and trained translators post-edit and sign off, can capture efficiency gains while preserving craft. But that model depends on fair compensation for the post-editing work, not on the expectation that humans will fix model flaws for free. Contracts and procurement practices should reflect the changed nature of labor: the work of reviewing and shaping model output is real, skilled work that carries responsibility and should be budgeted accordingly.
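A human-in-the-loop pipeline of this kind can be sketched in a few lines. The sketch below is purely illustrative, assuming a hypothetical per-line record (`LineRecord`, `sign_off`, `exportable` are invented names, not any studio's actual tooling): the model supplies a draft, but nothing ships until a named human post-editor has revised and signed off on every line, which also makes the review work visible and billable.

```python
from dataclasses import dataclass
from typing import List, Optional

# Minimal sketch of a human-in-the-loop localization record.
# All names here are hypothetical assumptions for illustration.

@dataclass
class LineRecord:
    line_id: str
    source_text: str             # original Czech string
    draft: str                   # machine-generated candidate translation
    final: Optional[str] = None  # human-approved text; None until sign-off
    editor: Optional[str] = None # who reviewed and approved the line

    def sign_off(self, edited_text: str, editor: str) -> None:
        """A trained post-editor revises the draft and takes responsibility."""
        self.final = edited_text
        self.editor = editor

def exportable(lines: List[LineRecord]) -> bool:
    """Only ship a batch once every line carries a human sign-off."""
    return all(line.final is not None and line.editor for line in lines)

line = LineRecord("dlg_0042", "Dobré ráno, pane.", "Good morning, sir.")
batch = [line]
print(exportable(batch))   # False: no human has approved the draft yet
line.sign_off("Mornin', m'lord.", editor="translator_a")
print(exportable(batch))   # True: every line is human-approved
```

The point of the export gate is contractual as much as technical: the `editor` field records who did the skilled review work, which is exactly the labor the surrounding text argues should be budgeted and paid for.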
Creative roles that can survive and thrive
Automation does not necessarily mean the end of human involvement. The industry has room for evolving roles that emphasize higher-level judgment and creative curation:
- Localization narrative designers who ensure character arcs and cultural coherence across languages.
- Post-editors focused on voice and continuity rather than line-by-line proofreading.
- Tooling specialists who design prompts, glossaries, and dataset curation to steer models toward desirable outputs.
- Quality assurance leads who integrate linguistic testing with gameplay testing to capture contextual failures.
These roles demand a different mix of skills: not only translation fluency, but also design thinking, tooling literacy and the ability to arbitrate between creative intent and technical constraints.
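The tooling-specialist role above can be made concrete with a small sketch. Everything here is a hypothetical illustration (the glossary entries, the prompt wording, and `build_prompt` are all invented for this example): instead of sending raw strings to a model, the specialist injects vetted glossary renderings and a character-voice note so the model is steered toward the team's established choices.

```python
# Hypothetical sketch of prompt assembly for a localization model.
# The glossary pairs and prompt format are illustrative assumptions.

GLOSSARY = {"pane": "m'lord", "krčma": "tavern"}

def build_prompt(source_line: str, character: str, voice_note: str) -> str:
    # Fold the vetted glossary into the instruction so the model
    # reuses approved renderings instead of improvising new ones.
    terms = "\n".join(f"- '{cz}' -> '{en}'" for cz, en in GLOSSARY.items())
    return (
        "Translate the Czech game line into 15th-century-flavoured English.\n"
        f"Character: {character}. Voice: {voice_note}.\n"
        "Use these vetted glossary renderings where the terms occur:\n"
        f"{terms}\n"
        f"Line: {source_line}\n"
    )

prompt = build_prompt("Dobrý den, pane.", "Henry", "earnest, plain-spoken")
```

The design choice worth noting is that the glossary lives outside the prompt code, so linguists can own and vet it without touching the tooling.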
Policy, procurement and community pressure
Studios and publishers will make decisions primarily on cost, schedule and risk. That means change is most likely to come from policy interventions and community pressure, not moral suasion alone. Several practical steps could help rebalance incentives:
- Contractual standards that define acceptable use of automated tools, require disclosure when they are used, and ensure fair compensation for post-editing and review.
- Industry glossaries and style guides that are open and machine-readable, so automated tools operate from shared, vetted foundations.
- Audit trails recording whether a line originated from human translation, machine translation, or a hybrid process — valuable for quality assurance and accountability.
- Community transparency where studios share the extent and purpose of AI use so players and localization professionals can respond constructively.
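Two of the proposals above, machine-readable glossaries and per-line audit trails, can be sketched as plain data. The field names below are illustrative assumptions, not an existing industry standard; the point is only that both artifacts serialize to ordinary JSON, so QA tools, procurement reviewers, or community auditors could read them.

```python
import json

# Hypothetical machine-readable glossary entry and per-line audit record.
# Field names are illustrative assumptions, not an industry standard.

glossary_entry = {
    "source": "pane",                 # Czech term of address
    "target": "m'lord",               # vetted period-appropriate rendering
    "notes": "Use when commoners address nobility; avoid the modern 'sir'.",
    "approved_by": "lead_localizer",
}

audit_record = {
    "line_id": "dlg_0042",
    "origin": "hybrid",               # one of: "human", "mt", "hybrid"
    "model": "mt-engine-v3",          # placeholder name; null if origin is "human"
    "post_editor": "translator_a",    # who signed off on the final text
    "revision": 2,
}

# Both structures round-trip through plain JSON, so any tool in the
# pipeline can consume them without bespoke parsers.
print(json.dumps(audit_record, indent=2))
```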
What the AI news community should watch and demand
For journalists and writers covering AI, this story is a microcosm of larger questions: who benefits from automation, who loses, and how do we measure the loss of human skill in cultural products? Coverage should probe beyond sensational headlines and examine contractual terms, procurement patterns, quality metrics, and the lived experiences of translators and localizers.
When a translator says a tool cost him his job, reporters should ask studios to explain how automated outputs were validated, how human roles were redefined and how affected workers were compensated or transitioned. They should also surface examples where automation enriched the workflow without eroding human roles, to distinguish responsible adoption from blunt replacement.
An appeal to balance
Automation is likely inevitable; the question is how we choose to automate. Will the industry use models to amplify human creativity, handling repetitive tasks while freeing translators to focus on high-value cultural work? Or will it use these tools primarily to compress labor costs and shrink opportunities?
There is room for optimism. Translators who adapt — learning post-editing, tooling and narrative design — can carve roles that are richer and more strategic. Studios that invest in human oversight and fair labor practices can benefit from cost efficiencies while preserving the integrity of their games. And the AI tools themselves can improve if developers and localizers collaborate on domain-specific datasets, glossaries and evaluation metrics that reward nuance as much as fluency.
Conclusion: a call to steward the craft
The story of a single translator who claims he was replaced on a marquee title is a cautionary tale and a wake-up call. It asks us to consider what we value in cultural production and how we will govern the technologies that touch it. As models proliferate, the stewardship of language — in games, films and books — becomes a collective responsibility: for studios to be transparent and fair, for communities to demand quality, and for professionals to evolve their craft.
If we do that work thoughtfully, automation can be a tool that expands creative possibility rather than a force that dispossesses the human hands and voices that make games worth caring about. That is the future worth fighting for.

